Results 1 - 20 of 27
1.
J Med Internet Res ; 23(12): e20028, 2021 12 02.
Article in English | MEDLINE | ID: mdl-34860667

ABSTRACT

BACKGROUND: The National Cancer Institute Informatics Technology for Cancer Research (ITCR) program provides a series of funding mechanisms to create an ecosystem of open-source software (OSS) that serves the needs of cancer research. As the ITCR ecosystem substantially grows, it faces the challenge of the long-term sustainability of the software being developed by ITCR grantees. To address this challenge, the ITCR sustainability and industry partnership working group (SIP-WG) was convened in 2019. OBJECTIVE: The charter of the SIP-WG is to investigate options to enhance the long-term sustainability of the OSS being developed by ITCR, in part by developing a collection of business model archetypes that can serve as sustainability plans for ITCR OSS development initiatives. The working group assembled models from the ITCR program, from other studies, and from the engagement of its extensive network of relationships with other organizations (eg, Chan Zuckerberg Initiative, Open Source Initiative, and Software Sustainability Institute) in support of this objective. METHODS: This paper reviews the existing sustainability models and describes 10 OSS use cases disseminated by the SIP-WG and others, including 3D Slicer, Bioconductor, Cytoscape, Globus, i2b2 (Informatics for Integrating Biology and the Bedside) and tranSMART, Insight Toolkit, Linux, Observational Health Data Sciences and Informatics tools, R, and REDCap (Research Electronic Data Capture), in 10 sustainability aspects: governance, documentation, code quality, support, ecosystem collaboration, security, legal, finance, marketing, and dependency hygiene. RESULTS: Information available to the public reveals that all 10 OSS have effective governance, comprehensive documentation, high code quality, reliable dependency hygiene, strong user and developer support, and active marketing. 
These OSS include a variety of licensing models (eg, general public license version 2, general public license version 3, Berkeley Software Distribution, and Apache 2) and financial models (eg, federal research funding, industry and membership support, and commercial support). However, detailed information on ecosystem collaboration and security is not publicly provided by most OSS. CONCLUSIONS: We recommend 6 essential attributes for research software: alignment with unmet scientific needs, a dedicated development team, a vibrant user community, a feasible licensing model, a sustainable financial model, and effective product management. We also stress important actions for future ITCR activities to consider: discussing sustainability and licensing models for ITCR OSS, establishing a central library, and allocating consulting resources to code quality control, ecosystem collaboration, security, and dependency hygiene.


Subjects
Ecosystem ; Neoplasms ; Humans ; Informatics ; Neoplasms/therapy ; Research ; Software ; Technology
2.
J Med Imaging (Bellingham) ; 3(1): 014503, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26989759

ABSTRACT

Matching the bolus arrival time (BAT) of the arterial input function (AIF) and tissue residue function (TRF) is necessary for accurate pharmacokinetic (PK) modeling of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We investigated the sensitivity of the volume transfer constant (K (trans)) and the extravascular extracellular volume fraction (v e) to BAT and compared the results of four automatic BAT measurement methods in characterization of prostate and breast cancers. Variation in the delay between the AIF and TRF resulted in a monotonic trend in K (trans) and v e values. The results of the automatic BAT estimators for clinical data were all comparable except for one BAT estimation method. Our results indicate that inaccuracies in BAT measurement can lead to variability among DCE-MRI PK model parameters, diminish the quality of model fit, and produce fewer valid voxels in a region of interest. Although the selection of the BAT method did not affect the direction of change in the treatment assessment cohort, we suggest that BAT measurement methods be used consistently in the course of longitudinal studies to control measurement variability.
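The sensitivity described in this abstract can be illustrated with a small simulation. The sketch below is not the authors' method: it assumes a standard Tofts model, a toy Gaussian AIF, and made-up parameter values, then grid-fits K (trans) while the AIF bolus arrival is artificially delayed relative to the tissue curve.

```python
import numpy as np

def tofts(t, aif, ktrans, kep, dt=1.0):
    # Standard Tofts model: Ct(t) = Ktrans * [AIF convolved with exp(-kep*t)]
    return ktrans * np.convolve(aif, np.exp(-kep * t))[: t.size] * dt

t = np.arange(0.0, 300.0)                     # 1 s sampling over 5 min
aif = np.exp(-0.5 * ((t - 30.0) / 5.0) ** 2)  # toy Gaussian bolus (assumption)
true_ktrans, kep = 0.25, 0.05                 # arbitrary demo values
ct = tofts(t, aif, true_ktrans, kep)          # simulated tissue residue curve

def fitted_ktrans(bat_error_samples):
    # Grid-search Ktrans using an AIF whose arrival time is off by a few samples
    aif_shifted = np.roll(aif, bat_error_samples)
    grid = np.linspace(0.05, 0.6, 200)
    sse = [np.sum((tofts(t, aif_shifted, k, kep) - ct) ** 2) for k in grid]
    return grid[int(np.argmin(sse))]

fits = [fitted_ktrans(s) for s in (0, 5, 10, 15)]
# The fitted Ktrans drifts monotonically as the AIF/TRF delay grows.
```

With zero delay the fit recovers the true value; growing delay pulls the estimate steadily away, mirroring the monotonic trend reported above.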

3.
Transl Oncol ; 7(1): 153-66, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24772219

ABSTRACT

Pharmacokinetic analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) time-course data allows estimation of quantitative parameters such as K (trans) (rate constant for plasma/interstitium contrast agent transfer), v e (extravascular extracellular volume fraction), and v p (plasma volume fraction). A plethora of factors in DCE-MRI data acquisition and analysis can affect accuracy and precision of these parameters and, consequently, the utility of quantitative DCE-MRI for assessing therapy response. In this multicenter data analysis challenge, DCE-MRI data acquired at one center from 10 patients with breast cancer before and after the first cycle of neoadjuvant chemotherapy were shared and processed with 12 software tools based on the Tofts model (TM), extended TM, and Shutter-Speed model. Inputs of tumor region of interest definition, pre-contrast T1, and arterial input function were controlled to focus on the variations in parameter value and response prediction capability caused by differences in models and associated algorithms. Considerable parameter variations were observed with the within-subject coefficient of variation (wCV) values for K (trans) and v p being as high as 0.59 and 0.82, respectively. Parameter agreement improved when only algorithms based on the same model were compared, e.g., the K (trans) intraclass correlation coefficient increased to as high as 0.84. Agreement in parameter percentage change was much better than that in absolute parameter value, e.g., the pairwise concordance correlation coefficient improved from 0.047 (for K (trans)) to 0.92 (for K (trans) percentage change) in comparing two TM algorithms. 
Nearly all algorithms provided good to excellent (univariate logistic regression c-statistic value ranging from 0.8 to 1.0) early prediction of therapy response using the metrics of mean tumor K (trans) and k ep (=K (trans)/v e, intravasation rate constant) after the first therapy cycle and the corresponding percentage changes. The results suggest that the interalgorithm parameter variations are largely systematic and thus not likely to significantly affect the utility of DCE-MRI for assessment of therapy response.
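The agreement pattern above (poor concordance on absolute values, excellent concordance on percentage change) can be made concrete. The sketch below is illustrative only, with made-up K (trans) values for two hypothetical algorithms; it implements Lin's concordance correlation coefficient, one of the pairwise agreement measures quoted.

```python
import numpy as np

def concordance_cc(x, y):
    # Lin's concordance correlation coefficient between two raters' values
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Made-up Ktrans estimates: algorithm B reads systematically 3x higher
# than algorithm A, but both observe the same relative changes.
pre_a, post_a = np.array([0.10, 0.20, 0.30]), np.array([0.05, 0.12, 0.24])
pre_b, post_b = 3 * pre_a, 3 * post_a

pct_a = 100.0 * (post_a - pre_a) / pre_a
pct_b = 100.0 * (post_b - pre_b) / pre_b

ccc_abs = concordance_cc(np.r_[pre_a, post_a], np.r_[pre_b, post_b])  # poor
ccc_pct = concordance_cc(pct_a, pct_b)                                # perfect
```

A systematic scale difference destroys agreement in absolute values but cancels out of percentage change, which is why response prediction can survive large interalgorithm variation.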

4.
Neuroinformatics ; 12(2): 245-59, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24006207

ABSTRACT

In this work, we present a faceted-search-based approach to anatomy visualization that combines a three-dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the pieces of information (obtained by searching the ontology) relevant to a user query. Hence, the user can produce visualizations starting from minimally specified queries. Furthermore, by automatically translating user queries into the controlled terminology, our approach eliminates the need for the user to know that terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the open-source 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, along with patient-specific geometric models computed from the SPL brain tumor dataset.


Subjects
Biological Ontologies ; Brain/anatomy & histology ; Imaging, Three-Dimensional/methods ; Search Engine ; Software ; Databases, Factual ; Humans
5.
Sci Rep ; 3: 1364, 2013.
Article in English | MEDLINE | ID: mdl-23455483

ABSTRACT

Volumetric change in glioblastoma multiforme (GBM) over time is a critical factor in treatment decisions. Typically, the tumor volume is computed on a slice-by-slice basis using MRI scans obtained at regular intervals. 3D Slicer - a free platform for biomedical research - provides an alternative to this manual slice-by-slice segmentation process that is significantly faster and requires less user interaction. In this study, 4 physicians segmented GBMs in 10 patients, once using the competitive region-growing-based GrowCut segmentation module of Slicer, and once purely by drawing boundaries manually on a slice-by-slice basis. Furthermore, we provide a variability analysis for three physicians on 12 GBMs. The time required for GrowCut segmentation was on average 61% of the time required for purely manual segmentation. A comparison of Slicer-based segmentation with manual slice-by-slice segmentation yielded a Dice Similarity Coefficient of 88.43 ± 5.23% and a Hausdorff Distance of 2.32 ± 5.23 mm.
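For reference, the two agreement metrics reported above can be computed as follows. This is a generic sketch on toy 2D masks, not the Slicer implementation; the shifted-disc "tumor" data are assumptions for illustration.

```python
import numpy as np

def dice(a, b):
    # Dice Similarity Coefficient between two binary masks
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    # Symmetric Hausdorff distance between two point sets (brute force)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy 2D "tumor" masks: a disc and the same disc shifted by 2 voxels.
yy, xx = np.mgrid[0:64, 0:64]
m1 = (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2
m2 = (yy - 32) ** 2 + (xx - 34) ** 2 <= 10 ** 2

dsc = dice(m1, m2)
hd = hausdorff(np.argwhere(m1).astype(float), np.argwhere(m2).astype(float))
```

Dice rewards volumetric overlap while the Hausdorff distance reports the worst boundary disagreement, so the two metrics capture complementary aspects of segmentation agreement.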


Subjects
Glioblastoma/diagnosis ; Imaging, Three-Dimensional ; Magnetic Resonance Imaging ; Tumor Burden ; Glioblastoma/pathology ; Humans ; Image Processing, Computer-Assisted
6.
Proc IEEE Int Symp Biomed Imaging ; 2013: 748-751, 2013 Apr.
Article in English | MEDLINE | ID: mdl-25404996

ABSTRACT

Accurate automated segmentation of brain tumors in MR images is challenging due to overlapping tissue intensity distributions and amorphous tumor shape. However, a clinically viable solution providing precise quantification of tumor and edema volume would enable better preoperative planning, treatment monitoring, and drug development. Our contributions are threefold. First, we design efficient gradient- and LBP-TOP-based texture features which improve classification accuracy over standard intensity features. Second, we extend our texture and intensity features to symmetric texture and symmetric intensity, which further improve the accuracy for all tissue classes. Third, we demonstrate further accuracy enhancement by extending our long-range features from 100 mm to a full 200 mm. We assess our brain segmentation technique on 20 patients in the BraTS 2012 dataset. The impact of each contribution is measured, and the combination of all the features is shown to yield state-of-the-art accuracy and speed.

7.
Proc SPIE Int Soc Opt Eng ; 8669: 86690H, 2013 Mar 13.
Article in English | MEDLINE | ID: mdl-25075265

ABSTRACT

This work addresses the challenging problem of parsing 2D radiographs into salient anatomical regions such as the left and right lungs and the heart. We propose integrating automatic landmark-constellation detection, via rejection cascade classifiers and a learned geometric constellation subset detector model, with a multi-object active appearance model (MO-AAM) initialized by the detected landmark constellation subset. Our main contribution is twofold. First, we propose a recovery method for false positive and false negative landmarks which makes it possible to handle extreme ranges of anatomical and pathological variability. Specifically, we (1) recover false negative (missing) landmarks through the consensus of inferences from subsets of the detected landmarks, and (2) choose one of multiple false positives for the same landmark by learning Gaussian distributions for the relative location of each landmark. Second, we train an MO-AAM using the true landmarks for the detectors and, at test time, initialize the model using the detected landmarks. Our model fitting allows simultaneous localization of multiple regions by encoding the shape and appearance information of multiple objects in a single model. The integration of the landmark detection method and the MO-AAM reduces the mean distance error of the detected landmarks from 20.0 mm to 12.6 mm. We assess our method using a database of scout CT scans from 80 subjects with widely varying pathology.

8.
Magn Reson Imaging ; 30(9): 1323-41, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22770690

ABSTRACT

Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to the reproducibility and efficiency of quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm and providing abstractions for the common tasks of data communication, visualization, and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications.
To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer.


Subjects
Diagnostic Imaging/methods ; Imaging, Three-Dimensional/methods ; Automation ; Biomarkers/metabolism ; Brain Neoplasms/pathology ; Databases, Factual ; Glioblastoma/pathology ; Head and Neck Neoplasms/pathology ; Humans ; Magnetic Resonance Imaging/methods ; Male ; Medical Informatics/methods ; Positron-Emission Tomography/methods ; Prostatic Neoplasms/pathology ; Software ; Tomography, X-Ray Computed/methods
9.
Med Phys ; 39(7): 4245-54, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22830758

ABSTRACT

PURPOSE: X-ray computed tomography angiography (CTA) is the modality of choice to noninvasively monitor and diagnose heart disease, with coronary artery health and stenosis detection being of particular interest. Reliable, clinically relevant coronary artery imaging mandates high spatiotemporal resolution. However, advances in intrinsic scanner spatial resolution (CT scanners are available which combine nearly 900 detector columns with focal spot oversampling) can be tempered by motion blurring, particularly in patients with unstable heartbeats. As a result, numerous methods have recently been devised to improve coronary CTA imaging. Solutions involving hardware, multisector algorithms, or β-blockers are limited by cost, oversimplifying assumptions about cardiac motion, and populations showing contraindications to drugs, respectively. This work introduces an inexpensive algorithmic solution that retrospectively improves the temporal resolution of coronary CTA without significantly affecting spatial resolution. METHODS: Given the goal of ruling out coronary stenosis, the method focuses on "deblurring" the coronary arteries. The approach makes no assumptions about cardiac motion, can be used on exams acquired at high heart rates (even over 75 beats/min), and draws on a fast and accurate three-dimensional (3D) nonrigid bidirectional labeled point matching approach to estimate the trajectories of the coronary arteries during image acquisition. Motion compensation is achieved by employing a 3D warping of a series of partial reconstructions based on the estimated motion fields. Each of these partial reconstructions is created from data acquired over a short time interval. For brevity, the algorithm is termed "Subphasic Warp and Add" (SWA) reconstruction.
RESULTS: The performance of the new motion estimation-compensation approach was evaluated by a systematic observer study conducted using nine human cardiac CTA exams acquired over a range of average heart rates between 68 and 86 beats/min. Algorithm performance was baselined against exams reconstructed using standard filtered backprojection (FBP). The study was performed by three experienced reviewers using the American Heart Association's 15-segment model. All vessel segments were evaluated to quantify whether they allowed a clinical diagnosis before and after motion estimation-compensation using SWA. To the best of the authors' knowledge this is the first such observer study to show that an image processing-based software approach can improve the clinical diagnostic value of CTA for coronary artery evaluation. CONCLUSIONS: Results from the observer study show that the SWA method described here can dramatically reduce coronary artery motion and preserve real pathology, without affecting spatial resolution. In particular, the method successfully mitigated motion artifacts in 75% of all initially nondiagnostic coronary artery segments, and in over 45% of the cases this improvement was enough to make a previously nondiagnostic vessel segment clinically diagnostic.


Subjects
Algorithms ; Artifacts ; Coronary Angiography/methods ; Imaging, Three-Dimensional/methods ; Pattern Recognition, Automated/methods ; Radiographic Image Enhancement/methods ; Radiographic Image Interpretation, Computer-Assisted/methods ; Tomography, X-Ray Computed/methods ; Coronary Angiography/instrumentation ; Humans ; Motion ; Phantoms, Imaging ; Reproducibility of Results ; Sensitivity and Specificity ; Tomography, X-Ray Computed/instrumentation
10.
Article in English | MEDLINE | ID: mdl-21995029

ABSTRACT

We introduce an automated and probabilistic method for subject-specific segmentation of sheet-like fiber tracts. In addition to clustering of trajectories into anatomically meaningful bundles, the method provides statistics of diffusion measures by establishing point correspondences on the estimated medial representation of each bundle. We also introduce a new approach for medial surface generation of sheet-like fiber bundles in order to initialize the proposed clustering algorithm. Applying the new method in a population study of brain aging with 24 subjects demonstrates the capabilities and strengths of the algorithm in identifying and visualizing spatial patterns of group differences.


Subjects
Brain Mapping/methods ; Brain/pathology ; Image Processing, Computer-Assisted/methods ; Nerve Fibers, Myelinated/pathology ; Adult ; Aged ; Aging ; Algorithms ; Cluster Analysis ; Humans ; Likelihood Functions ; Models, Statistical ; Probability
11.
Proc IEEE Int Symp Biomed Imaging ; 2011: 1645-1648, 2011.
Article in English | MEDLINE | ID: mdl-30881602

ABSTRACT

Interactive techniques leverage the expert knowledge of users to produce accurate image segmentations. However, segmentation accuracy varies across users. Additionally, users may require training with the algorithm and its exposed parameters to obtain the best segmentation with minimal effort. Our work combines active learning with interactive segmentation and (i) achieves accuracy as good as that of fully user-guided segmentation but with significantly fewer user interactions (on average 50%), and (ii) achieves robust segmentation by reducing segmentation variability across user inputs. Our approach interacts with the user to suggest gestures or seed point placements. We present an extensive experimental evaluation of our results on two different publicly available datasets.

12.
Chest ; 135(6): 1580-1587, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19141526

ABSTRACT

BACKGROUND: Detection of small indeterminate pulmonary nodules (4 to 10 mm in diameter) in clinical practice is increasing, largely because of increased utilization and improved imaging technology. Although software currently exists for CT scanners that automates nodule volume estimation, the imprecision associated with volume estimates is particularly poor for nodules ≤6 mm in diameter, with greater imprecision associated with increasing CT scan slice thickness. This study examined the effects of the volume estimation error associated with four CT scan slice thicknesses (0.625, 1.25, 2.50, and 5.00 mm) on estimates of volume doubling time (VDT) for solid nodules of various sizes. METHODS: Data reflecting the accuracy of 1,624 automated volume estimations were obtained from experiments incorporating volume estimation software, performed on a commercially available lung phantom. These data informed mathematical simulations used to estimate the imprecision around VDT estimates for hypothetical pairs of volume estimates for a given solid pulmonary nodule observed at different time points. RESULTS: The confidence intervals around the VDT estimates were extremely wide for the 2.50- and 5.00-mm slice thicknesses, often encompassing values traditionally associated with both benignity and malignancy for simulated 1- and 2-mm growths in diameter. CONCLUSIONS: Because of the inaccuracy in automated volume estimation, the confidence a clinician should have in estimating VDT should depend strongly on the degree of observed growth and on the CT scan slice thickness. The performance of CT scanners with slice thicknesses ≥2.5 mm for assessing growth in pulmonary nodules is essentially inadequate for 1-mm changes in nodule diameter.
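The VDT arithmetic underlying this argument is simple to reproduce. The sketch below assumes exponential growth and spherical nodules; the ±25% relative volume error is a made-up figure standing in for thick-slice imprecision, not a value from the study.

```python
import math

def sphere_volume(diameter_mm):
    # V = (pi/6) * d^3 for a spherical nodule
    return (math.pi / 6.0) * diameter_mm ** 3

def doubling_time(v1, v2, days):
    # Exponential growth: VDT = t * ln(2) / ln(V2/V1)
    return days * math.log(2.0) / math.log(v2 / v1)

v1, v2 = sphere_volume(4.0), sphere_volume(5.0)  # 4 mm -> 5 mm over 90 days
vdt = doubling_time(v1, v2, 90.0)                # ~93 days

# Propagate a +/-25% volume-estimation error (assumed) to a crude VDT range.
err = 0.25
vdt_lo = doubling_time(v1 * (1 - err), v2 * (1 + err), 90.0)
vdt_hi = doubling_time(v1 * (1 + err), v2 * (1 - err), 90.0)
# The interval spans VDTs conventionally read as malignant through benign.
```

A 1-mm diameter change drives the logarithm in the denominator close to zero once the volume error is large, which is why the confidence intervals in the abstract become so wide for thick slices.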


Subjects
Lung Neoplasms/pathology ; Radiographic Image Interpretation, Computer-Assisted ; Solitary Pulmonary Nodule/diagnostic imaging ; Solitary Pulmonary Nodule/pathology ; Tomography, X-Ray Computed/methods ; Confidence Intervals ; Humans ; Linear Models ; Lung Neoplasms/diagnostic imaging ; Pattern Recognition, Automated ; Phantoms, Imaging ; Sensitivity and Specificity ; Tumor Burden
13.
Med Image Comput Comput Assist Interv ; 12(Pt 1): 239-46, 2009.
Article in English | MEDLINE | ID: mdl-20425993

ABSTRACT

Segmentation of low contrast objects is an important task in clinical applications such as lesion analysis and vascular wall remodeling analysis. Several solutions to low contrast segmentation that exploit high-level information have been previously proposed, such as shape priors and generative models. In this work, we incorporate a priori distributions of intensity and low-level image information into a nonparametric dissimilarity measure that defines a local indicator function for the likelihood of belonging to a foreground object. We then integrate the indicator function into a level set formulation for segmenting low contrast structures. We apply the technique to the clinical problem of positive remodeling of the vessel wall in cardiac CT angiography images. We present results on a dataset of twenty-five patient scans, showing improvement over conventional gradient-based level sets.


Subjects
Angiography/methods ; Artificial Intelligence ; Pattern Recognition, Automated/methods ; Radiographic Image Enhancement/methods ; Radiographic Image Interpretation, Computer-Assisted/methods ; Subtraction Technique ; Tomography, X-Ray Computed/methods ; Algorithms ; Humans ; Reproducibility of Results ; Sensitivity and Specificity
14.
Radiology ; 247(2): 400-8, 2008 May.
Article in English | MEDLINE | ID: mdl-18430874

ABSTRACT

PURPOSE: To prospectively evaluate in a phantom the effects of reconstruction kernel, field of view (FOV), and section thickness on automated measurements of pulmonary nodule volume. MATERIALS AND METHODS: Spherical and lobulated pulmonary nodules 3-15 mm in diameter were placed in a commercially available lung phantom and scanned by using a 16-section computed tomographic (CT) scanner. Nodule volume (V) was determined by using the diameters of 27 spherical nodules and the mass and density values of 29 lobulated nodules measured by using the formulas V = (4/3)πr³ (spherical nodules) and V = 1000 × (M/D) (lobulated nodules) as reference standards, where r is nodule radius; M, nodule mass; and D, wax density. Experiments were performed to evaluate seven reconstruction kernels and the independent effects of FOV and section thickness. Automated nodule volume measurements were performed by using computer-assisted volume measurement software. General linear regression models were used to examine the independent effects of each parameter, with percentage overestimation of volume as the dependent variable of interest. RESULTS: There was no substantial difference in the accuracy of volume estimations across the seven reconstruction kernels. The bone reconstruction kernel was deemed optimal on the basis of the results of a series of statistical analyses and other qualitative findings. Overall, volume accuracy was significantly associated (P < .0001) with larger reference standard-measured nodule diameter. There was substantial overestimation of the volumes of the 3-5-mm nodules measured by using the volume measurement software. Decreasing the FOV yielded no significant improvement in the precision of lobulated nodule volume measurements. The accuracy of volume estimations, particularly those for small nodules, was significantly (P < .0001) affected by section thickness.
CONCLUSION: Substantial, highly variable overestimation of volume occurs with decreasing nodule diameter. A section thickness that enables the acquisition of at least three measurements along the z-axis should be used to measure the volumes of larger pulmonary nodules.
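The reference standards and the dependent variable used above translate directly into code. This sketch assumes M is measured in grams and D in g/cm³ (so that M/D times 1000 gives mm³); the abstract states only V = 1000 × (M/D), and the hypothetical software reading below is invented for illustration.

```python
import math

def sphere_volume_mm3(radius_mm):
    # Reference standard for spherical nodules: V = (4/3) * pi * r^3
    return (4.0 / 3.0) * math.pi * radius_mm ** 3

def lobulated_volume_mm3(mass_g, wax_density_g_cm3):
    # Reference standard for lobulated nodules: V = 1000 * (M / D);
    # M/D gives cm^3 and the factor 1000 converts to mm^3 (units assumed).
    return 1000.0 * mass_g / wax_density_g_cm3

def pct_overestimation(measured_mm3, reference_mm3):
    # Dependent variable of the regression models described above
    return 100.0 * (measured_mm3 - reference_mm3) / reference_mm3

ref = sphere_volume_mm3(2.5)         # a 5 mm diameter nodule, ~65.4 mm^3
err = pct_overestimation(90.0, ref)  # hypothetical software reading
```

Because reference volume shrinks with the cube of diameter, a fixed absolute measurement error produces much larger percentage overestimation for small nodules, consistent with the 3-5-mm findings above.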


Subjects
Lung Neoplasms/pathology ; Radiographic Image Interpretation, Computer-Assisted ; Solitary Pulmonary Nodule/pathology ; Tomography, X-Ray Computed/methods ; Humans ; In Vitro Techniques ; Linear Models ; Lung Neoplasms/diagnostic imaging ; Phantoms, Imaging ; Prospective Studies ; Reference Standards ; Solitary Pulmonary Nodule/diagnostic imaging
15.
Acad Radiol ; 14(11): 1382-8, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17964461

ABSTRACT

RATIONALE AND OBJECTIVES: To analyze radiologist lung nodule segmentations in the Lung Imaging Database Consortium (LIDC) database and to apply statistical tools to generate estimates of ground truth. This investigation expands on earlier work by considering a larger number of cases from the LIDC database, and results were generated on a per-nodule basis, as opposed to a per-case basis as was done previously. MATERIALS AND METHODS: We analyzed nodule data drawn from the 41 most recent computed tomography exams released by the LIDC. We combined radiologist segmentations for a given nodule using different consensus schemes: union, intersection, and simultaneous truth and performance level estimation (STAPLE). We also generated three-dimensional models of the manual segmentations using discrete marching cubes to visualize features of the data. RESULTS: Using the union as the consensus scheme produced the greatest number of nodule-positive voxels while using the intersection produced the fewest. Considering only nodules for which all readers agreed on nodule presence, STAPLE-computed sensitivity averages for readers one, two, three, and four were 0.91, 0.83, 0.90, and 0.77, respectively. Specificity averages were 0.97, 0.98, 0.97, and 0.97. Considering cases for which there was disagreement about nodule presence, sensitivity results become 0.67, 0.74, 0.60, and 0.37. Specificity results in this case are 0.95, 0.95, 0.95, and 0.98. STAPLE-generated probability maps (pmaps) exhibited probability values tightly grouped below the 0.25 and above the 0.75 probability levels. Three-dimensional models of manually segmented nodules revealed step artefacts in the segmentation data. CONCLUSIONS: Radiologists often disagree about nodule presence. Ideally, each reader's sensitivity and specificity would be known a priori for optimal STAPLE results.
Knowing these values and developing manual segmentation tools and imaging protocols that mitigate unwanted segmentation features (such as step artefacts) can result in more accurate estimates of ground truth. Furthermore, a computer-aided detection algorithm's performance is a function of the ground truth estimate by which it is scored.
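The consensus bookkeeping described in this abstract is easy to sketch. The toy example below uses made-up reader masks and shows only the union/intersection schemes plus per-reader sensitivity/specificity against a chosen truth estimate; STAPLE itself estimates these quantities iteratively via expectation-maximization and is not reimplemented here.

```python
import numpy as np

# Three hypothetical readers' binary segmentations of the same voxels.
r1 = np.array([1, 1, 1, 0, 0, 1, 0, 0], bool)
r2 = np.array([1, 1, 0, 0, 1, 1, 0, 0], bool)
r3 = np.array([1, 0, 1, 1, 0, 1, 0, 0], bool)
readers = [r1, r2, r3]

union = np.logical_or.reduce(readers)   # most nodule-positive voxels
inter = np.logical_and.reduce(readers)  # fewest nodule-positive voxels

def sens_spec(reader, truth):
    # Per-reader sensitivity and specificity against a truth estimate
    tp = np.logical_and(reader, truth).sum()
    tn = np.logical_and(~reader, ~truth).sum()
    return tp / truth.sum(), tn / (~truth).sum()
```

The choice of truth estimate directly shifts each reader's apparent sensitivity and specificity, which is the point made above about scoring computer-aided detection against a particular ground truth.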


Subjects
Algorithms ; Artificial Intelligence ; Databases, Factual ; Pattern Recognition, Automated/methods ; Radiographic Image Interpretation, Computer-Assisted/methods ; Solitary Pulmonary Nodule/diagnostic imaging ; Humans ; Lung Neoplasms/diagnostic imaging ; Observer Variation ; Radiographic Image Enhancement/methods ; Reproducibility of Results ; Sensitivity and Specificity ; United States
16.
Med Image Anal ; 11(5): 443-57, 2007 Oct.
Article in English | MEDLINE | ID: mdl-17765003

ABSTRACT

The process of constructing an atlas typically involves selecting one individual from a sample on which to base or root the atlas. If the individual selected is far from the population mean, then the resulting atlas is biased towards this individual. This, in turn, may bias any inferences made with the atlas. Unbiased atlas construction addresses this issue by either basing the atlas on the individual which is the median of the sample or by an iterative technique whereby the atlas converges to the unknown population mean. In this paper, we explore the question of whether a single atlas is appropriate for a given sample or whether there is sufficient image-based evidence from which we can infer multiple atlases, each constructed from a subset of the data. We refer to this process as atlas stratification. Essentially, we determine whether the sample, and hence the population, is multi-modal and is best represented by an atlas per mode. In this preliminary work, we use the mean shift algorithm to identify the modes of the sample and multidimensional scaling to visualize the clustering process on clinical MRI neurological image datasets.


Subjects
Brain/anatomy & histology ; Databases, Factual ; Image Enhancement/methods ; Image Interpretation, Computer-Assisted/methods ; Information Storage and Retrieval/methods ; Magnetic Resonance Imaging/methods ; Adolescent ; Adult ; Aged ; Aged, 80 and over ; Algorithms ; Anatomy, Artistic/methods ; Artificial Intelligence ; Female ; Humans ; Male ; Medical Illustration ; Middle Aged ; Models, Anatomic ; Pattern Recognition, Automated/methods ; Reproducibility of Results ; Sensitivity and Specificity
17.
Inf Process Med Imaging ; 20: 134-46, 2007.
Article in English | MEDLINE | ID: mdl-17633695

ABSTRACT

This paper describes a system for detecting pulmonary nodules in CT images. It aims to label individual image voxels according to one of a number of anatomical (pulmonary vessels or junctions), pathological (nodules), or spurious (noise) events. The approach is orthodoxly Bayesian, with particular care taken in the objective establishment of prior probabilities and the incorporation of relevant medical knowledge. We provide, under explicit modeling assumptions, closed-form expressions for all the probability distributions involved. The technique is applied to real data, and we present a discussion of its performance.


Subjects
Algorithms , Artificial Intelligence , Imaging, Three-Dimensional/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Solitary Pulmonary Nodule/diagnostic imaging , Tomography, X-Ray Computed/methods , Bayes Theorem , Computer Simulation , Humans , Lung Neoplasms/diagnostic imaging , Models, Biological , Models, Statistical , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity , Subtraction Technique
18.
Article in English | MEDLINE | ID: mdl-23857521

ABSTRACT

In this paper, we describe a method for segmenting fiber bundles from diffusion-weighted magnetic resonance images using a locally-constrained region based approach. From a pre-computed optimal path, the algorithm propagates outward capturing only those voxels which are locally connected to the fiber bundle. Rather than attempting to find large numbers of open curves or single fibers, which individually have questionable meaning, this method segments the full fiber bundle region. The strengths of this approach include its ease-of-use, computational speed, and applicability to a wide range of fiber bundles. In this work, we show results for segmenting the cingulum bundle. Finally, we explain how this approach and extensions thereto overcome a major problem that typical region-based flows experience when attempting to segment neural fiber bundles.
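A minimal sketch of outward propagation from a pre-computed path: here a step-limited breadth-first fill on a toy 2-D mask serves as a stand-in for the paper's locally-constrained region-based flow (the grid, seed, and step budget are all invented for illustration):

```python
from collections import deque
import numpy as np

def grow_bundle(candidate, seeds, max_steps):
    """Flood fill from seed voxels on the pre-computed optimal path,
    capturing only candidate voxels reachable within max_steps -- a
    simple surrogate for the local-connectivity constraint."""
    seg = np.zeros_like(candidate, dtype=bool)
    q = deque((s, 0) for s in seeds)
    while q:
        (r, c), d = q.popleft()
        if d > max_steps or seg[r, c] or not candidate[r, c]:
            continue
        seg[r, c] = True  # voxel is locally connected to the bundle
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < candidate.shape[0] and 0 <= nc < candidate.shape[1]:
                q.append(((nr, nc), d + 1))
    return seg

# Toy example: a horizontal "bundle" plus one disconnected voxel.
img = np.zeros((5, 9), dtype=bool)
img[2, :6] = True        # the bundle region
img[0, 8] = True         # disconnected voxel; must be excluded
seg = grow_bundle(img, seeds=[(2, 0)], max_steps=6)
print(int(seg.sum()))    # -> 6: only the connected bundle voxels
```

Segmenting the full connected region, rather than collections of single fiber curves, is what gives the bundle segmentation its meaning here.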

19.
Article in English | MEDLINE | ID: mdl-17354807

ABSTRACT

This paper presents a model-based technique for lesion detection in colon CT scans that uses analytical shape models to map the local shape curvature at individual voxels to anatomical labels. Local intensity profiles and curvature information have been previously used for discriminating between simple geometric shapes such as spherical and cylindrical structures. This paper introduces novel analytical shape models for colon-specific anatomy, viz. folds and polyps, built by combining parts with simpler geometric shapes. The models better approximate the actual shapes of relevant anatomical structures while allowing the application of model-based analysis on the simpler model parts. All parameters are derived from the analytical models, resulting in a simple voxel labeling scheme for classifying individual voxels in a CT volume. The algorithm's performance is evaluated against expert-determined ground truth on a database of 42 scans, and performance is quantified by free-response receiver operating characteristic (FROC) curves.
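A toy illustration of mapping local curvature to anatomical labels: the shape-index formulation below is a standard way to separate cap-like (polyp) from ridge-like (fold) geometry, and the thresholds are illustrative choices, not the paper's model-derived parameters:

```python
import numpy as np

def label_voxel(k1, k2):
    """Map principal curvatures (k1 > k2) to a coarse anatomical label
    via the shape index; thresholds are illustrative, not the paper's."""
    si = (2.0 / np.pi) * np.arctan((k1 + k2) / (k1 - k2))
    if si > 0.75:
        return "polyp"   # cap-like / spherical: k1 close to k2
    if si > 0.25:
        return "fold"    # ridge-like / cylindrical: one dominant curvature
    return "other"

print(label_voxel(1.0, 0.9))   # nearly equal curvatures -> polyp
print(label_voxel(1.0, 0.05))  # one dominant curvature  -> fold
```

The appeal of this kind of scheme, as in the paper, is that each voxel is labeled independently from analytically derived quantities, with no training step.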


Subjects
Artificial Intelligence , Colonic Polyps/diagnostic imaging , Colonography, Computed Tomographic/methods , Imaging, Three-Dimensional/methods , Models, Biological , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Computer Simulation , Humans , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
20.
Article in English | MEDLINE | ID: mdl-17354808

ABSTRACT

Lung cancer remains an ongoing problem resulting in substantial deaths in the United States and the world. Within the United States, cancers of the lung and bronchus are the leading cause of fatal malignancy, accounting for 32% of the cancer deaths among men and 25% of the cancer deaths among women. Five-year survival is low (14%), but recent studies are beginning to provide some hope that we can increase survivability of lung cancer provided that the cancer is caught and treated in early stages. These results motivate revisiting the concept of lung cancer screening using thin-slice multidetector computed tomography (MDCT) protocols and automated detection algorithms to facilitate early detection. In this environment, resources that aid computer-aided detection (CAD) researchers in rapidly developing and hardening detection and diagnostic algorithms may have a significant impact on world health. The National Cancer Institute (NCI) formed the Lung Image Database Consortium (LIDC) to establish a resource for detecting, sizing, and characterizing lung nodules. This resource consists of multiple CT chest exams containing lung nodules that several radiologists manually contoured and characterized. Consensus on the location of nodule boundaries, or even on the existence of a nodule at a particular location in the lung, was not enforced, and each contour is considered a possible nodule. The researcher is encouraged to develop measures of ground truth to reconcile the multiple radiologist marks. This paper analyzes these marks to determine radiologist agreement and to apply statistical tools to the generation of a nodule ground truth. Features of the resulting consensus and individual markings are analyzed.
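One simple way to reconcile multiple radiologist marks into a ground truth, as the abstract invites researchers to do, is majority voting over readers. The mark matrix below is fabricated for illustration; the paper's statistical treatment is more involved:

```python
import numpy as np

# Hypothetical marks: 4 radiologists x 6 candidate locations.
# 1 = reader contoured a nodule there; consensus was not enforced,
# so each marked contour is only a *possible* nodule.
marks = np.array([[1, 1, 1, 0, 1, 0],
                  [1, 1, 0, 0, 1, 0],
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 1, 0, 0, 1]])

# Majority-vote reconciliation: a location enters the ground truth
# only if a strict majority of readers (>= 3 of 4) marked it.
votes = marks.sum(axis=0)
ground_truth = votes >= (marks.shape[0] // 2 + 1)
print(ground_truth.astype(int).tolist())  # -> [1, 1, 1, 0, 0, 0]
```

Varying the vote threshold trades sensitivity against specificity of the resulting reference standard, which is precisely why inter-reader agreement must be characterized before a single ground truth is fixed.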


Subjects
Clinical Trials as Topic , Databases, Factual , Imaging, Three-Dimensional/methods , Medical Records Systems, Computerized/statistics & numerical data , Radiographic Image Interpretation, Computer-Assisted/methods , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/epidemiology , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/epidemiology , Observer Variation , Reproducibility of Results , Sensitivity and Specificity , United States/epidemiology